6 research outputs found

    Application of Link Integrity techniques from Hypermedia to the Semantic Web

    No full text
    As the Web of Linked Data expands, it will become increasingly important to preserve data and links so that the data remains available and usable. In this work I present a method for locating linked data to preserve, which functions even when the URI the user wishes to preserve does not resolve (i.e. is broken or not RDF), and an application for monitoring and preserving the data. This work is based upon the principle of adapting ideas from hypermedia link integrity and applying them to the Semantic Web.
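The core check the abstract describes, deciding whether a URI still resolves to retrievable RDF before triggering preservation, can be sketched as follows. This is a minimal illustration only, not the thesis's actual implementation; the function names and the set of media types are assumptions.

```python
# Hypothetical sketch: classify a fetch result to decide whether a URI
# still serves RDF, or should be treated as broken/not RDF and therefore
# become a candidate for preservation.
RDF_CONTENT_TYPES = {
    "application/rdf+xml",
    "text/turtle",
    "application/n-triples",
    "application/ld+json",
}

def is_rdf_source(status_code, content_type):
    """Return True if the response looks like live, retrievable RDF."""
    if status_code != 200:
        return False  # does not resolve successfully
    # Strip parameters such as "; charset=utf-8" before comparing.
    base_type = content_type.split(";")[0].strip().lower()
    return base_type in RDF_CONTENT_TYPES

def needs_preservation(status_code, content_type):
    """A URI is a preservation candidate when it is broken or not RDF."""
    return not is_rdf_source(status_code, content_type)
```

For example, a 404 response or a `text/html` page would both be flagged for preservation, while a live Turtle document would not.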

    All about that - a URI profiling tool for monitoring and preserving linked data

    No full text
    All About That (AAT) is a URI profiling tool which allows users to monitor and preserve Linked Data in which they are interested. Its design is based upon the principle of adapting ideas from hypermedia link integrity and applying them to the Semantic Web. As the Linked Data Web expands, it will become increasingly important to maintain links so that the data remains useful; this tool is therefore presented as a step towards providing that maintenance capability.

    Link integrity for the Semantic Web

    No full text
    The usefulness and usability of data on the Semantic Web is ultimately reliant on the ability of clients to retrieve Resource Description Framework (RDF) data from the Web. When RDF data is unavailable, clients reliant on that data may either fail to function entirely or behave incorrectly. As a result, there is a need to investigate and develop techniques that aim to ensure that some data is still retrievable, even in the event that the primary source of the data is unavailable. Since this problem is essentially the classic link integrity problem from hypermedia and the Web, we look at the range of techniques suggested by past research and attempt to adapt these to the Semantic Web. Having studied past research, we identified two potentially promising strategies for solving the problem: 1) Replication and Preservation; and 2) Recovery. Using techniques developed to implement these strategies for hypermedia and the Web as a starting point, we designed our own implementations, adapted appropriately for the Semantic Web. We describe the design, implementation and evaluation of our adaptations before going on to discuss the implications of the usage of such techniques. In this research we show that such approaches can successfully apply link integrity to a variety of Semantic Web datasets, but that further research is needed before such solutions can be widely deployed.
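The two strategies named in the abstract can be illustrated with a small, hypothetical sketch. Plain in-memory dictionaries stand in for the live Web, a preservation cache, and alternate sources; none of these names come from the thesis itself.

```python
# Hypothetical sketch of the two strategies: (1) Replication and
# Preservation keeps a local copy of data as it is fetched;
# (2) Recovery consults alternate sources when the primary source
# is unavailable.

def fetch(uri, web, replica, mirrors):
    """Resolve a URI, preserving a replica and recovering on failure."""
    if uri in web:                  # primary source is available
        replica[uri] = web[uri]     # replication: preserve a copy
        return web[uri]
    if uri in replica:              # preservation: serve the stored copy
        return replica[uri]
    for mirror in mirrors:          # recovery: try alternate sources
        if uri in mirror:
            return mirror[uri]
    raise LookupError(f"{uri} is unavailable from all sources")
```

With this shape, a URI that was fetched while live remains retrievable from the replica after its primary source disappears, and a URI never seen before can still be recovered from a mirror.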

    Preserving Linked Data Integrity on the Semantic Web by application of techniques from Hypermedia

    No full text
    This report presents a literature review of past work in hypertext link integrity and of current work in the emerging area of Semantic Web link integrity. A design and prototype for a system which applies some ideas from hypertext link integrity to the Semantic Web is presented, alongside plans for future enhancements of this system. In addition, other possible avenues of research regarding ideas from traditional hypertext link integrity are briefly discussed.

    Preserving Linked Data on the Semantic Web by the application of Link Integrity techniques from Hypermedia

    No full text
    As the Web of Linked Data expands, it will become increasingly important to preserve data and links so that the data remains useful. In this work we present a method for locating linked data to preserve, which functions even when the URI the user wishes to preserve does not resolve (i.e. is broken or not RDF), and an application for monitoring and preserving the data. This work is based upon the principle of adapting ideas from hypermedia link integrity and applying them to the Semantic Web.

    Preserving Linked Data Integrity on the Semantic Web by application of techniques from Hypermedia

    No full text
    This paper describes All About That (AAT), an experimental prototype system for preserving linked data integrity on the Semantic Web. Link integrity is an open problem on the traditional Web that is largely ignored by many, since the power of search engines has made locating missing pages or alternate sources of information trivial. On the Linked Data Web, search technology is insufficiently mature to perform the same function with the level of effectiveness possible on the hypermedia Web. As the very nature of the Semantic Web is interlinked data, it is more important than ever that link integrity be preserved if we are to reliably reason across data and build useful applications. To this end, a replication and versioning approach based on ideas from the work of Veiga & Ferreira [1,2] is adopted and applied to the Semantic Web to create an experimental prototype. The prototype permits a user to monitor any number of URIs in which they are interested; the software can reproduce how the RDF at each URI appeared on a particular date and detail how the RDF has changed over time.
    1. Veiga, L., Ferreira, P.: RepWeb: replicated web with referential integrity. In: SAC '03: Proceedings of the 2003 ACM Symposium on Applied Computing, New York, NY, USA, ACM (2003) 1206-1211
    2. Veiga, L., Ferreira, P.: Turning the web into an effective knowledge repository. ICEIS 2004: Software Agents and Internet Computing 14(17) (2004)
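The snapshot-and-diff behaviour the abstract describes, dated versions of the RDF at a monitored URI plus a view of how it changed, can be sketched with plain sets of triples. This is an illustrative sketch under assumed names, not the AAT prototype's actual code.

```python
# Hypothetical sketch of the prototype's versioning behaviour: dated
# snapshots of the triples seen at a URI, with a diff of what was added
# and removed between two dates.

def snapshot(history, uri, date, triples):
    """Record the set of triples seen at a URI on a given date."""
    history.setdefault(uri, {})[date] = frozenset(triples)

def diff(history, uri, old_date, new_date):
    """Return (added, removed) triples between two dated snapshots."""
    old = history[uri][old_date]
    new = history[uri][new_date]
    return new - old, old - new
```

Comparing any two stored dates then yields exactly the triples that appeared and disappeared, which is the kind of change detail the prototype reports to the user.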